
    Toward an estimation of the relationship between cyclonic structures and damages at the ground in Europe

    Cyclonic systems dominate European and Mediterranean meteorology throughout the year and often induce severe weather in the form of heavy and/or long-lasting precipitation, with related phenomena such as strong winds and lightning. Surface cyclonic structures are often associated with well-defined precipitation patterns of different scales, durations and intensities. Cyclones confined to the upper troposphere, usually referred to as cut-off lows, may induce instability at lower levels and the development of convective precipitation. In this work the occurrence of cyclonic events (discriminating between surface cyclones and cut-off lows) is analyzed and matched with an economic-losses database to highlight the relation between these atmospheric structures and their impact on society in terms of casualties and material damages. The study focuses on continental Europe; based on the ERA-40 reanalysis, two databases of surface cyclones and cut-off lows have been constructed by means of automatic pattern-recognition algorithms. The impact on local communities is estimated from an insurance company record, which provides the location, date and type of the events, as well as the related losses in terms of damages and casualties. Results show the relatively high impact of cyclonic structures on human life in Europe: most weather-induced damages occur close to a cyclonic center, especially during the warm months. Damages and human losses are most frequent from late summer to January, and precipitation is the most damaging meteorological feature throughout the year.
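    The event-matching step described above, associating each insurance loss record with a nearby cyclone centre on the same date, can be sketched as a nearest-centre lookup. The data layout below (a date-keyed dictionary of centre coordinates and a fixed distance threshold in degrees) is a hypothetical simplification for illustration, not the paper's actual matching algorithm.

```python
import numpy as np

def match_losses_to_cyclones(loss_events, cyclone_centres, max_deg=5.0):
    """Flag each loss record as cyclone-related if a cyclone centre was
    detected on the same date within max_deg degrees (planar distance,
    a crude stand-in for a proper geodesic)."""
    flags = []
    for date, lat, lon in loss_events:
        centres = cyclone_centres.get(date, [])
        if not centres:
            flags.append(False)
            continue
        nearest = min(np.hypot(lat - clat, lon - clon)
                      for clat, clon in centres)
        flags.append(nearest <= max_deg)
    return flags
```

    A production version would use geodesic distances and a tolerance window in time rather than exact date equality.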

    Asymptotic forecast uncertainty and the unstable subspace in the presence of additive model error

    It is well understood that dynamic instability is among the primary drivers of forecast uncertainty in chaotic physical systems. Data assimilation techniques have been designed to exploit this phenomenon, reducing the effective dimension of the data assimilation problem to the directions of rapidly growing errors. Recent mathematical work has, moreover, provided formal proofs of the central hypothesis of the assimilation in the unstable subspace methodology of Anna Trevisan and her collaborators: for filters and smoothers in perfect, linear, Gaussian models, the distribution of forecast errors asymptotically conforms to the unstable-neutral subspace. Specifically, the column spans of the forecast and posterior error covariances asymptotically align with the span of the backward Lyapunov vectors with nonnegative exponents. Earlier mathematical studies have focused on perfect models; the current work explores the relationship between dynamical instability, the precision of observations, and the evolution of forecast error in linear models with additive model error. We prove bounds for the asymptotic uncertainty, explicitly relating the rate of dynamical expansion, model precision, and observational accuracy. Formalizing this relationship, we provide a novel, necessary criterion for the boundedness of forecast errors. Furthermore, we numerically explore the relationship between observational design, dynamical instability, and filter boundedness. Additionally, we include a detailed introduction to the multiplicative ergodic theorem and to the theory and construction of Lyapunov vectors. While forecast error in the stable subspace may not generically vanish, we show that, even without filtering, uncertainty there remains uniformly bounded due to its dynamical dissipation. However, the continuous reinjection of uncertainty from model errors may be excited by transient instabilities in stable modes of high variance, rendering forecast uncertainty impractically large. In the context of ensemble data assimilation, this requires rectifying the rank of the ensemble-based gain to account for the growth of uncertainty beyond the unstable and neutral subspace, additionally correcting stable modes whose frequent occurrences of positive local Lyapunov exponents excite model errors.
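    The backward Lyapunov vectors and exponents that organise this analysis are typically computed by repeatedly propagating an orthonormal tangent basis and re-orthonormalising it with a QR factorization. A minimal sketch for a constant linear propagator `A` (a deliberate simplification; the paper's setting involves time-dependent dynamics) could look like:

```python
import numpy as np

def lyapunov_exponents(A, n_iter=500):
    """Estimate the Lyapunov exponents of the linear map x -> A x by
    propagating an orthonormal basis and re-orthonormalising via QR;
    log|diag(R)| accumulates the per-direction growth rates."""
    d = A.shape[0]
    Q = np.eye(d)
    sums = np.zeros(d)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(A @ Q)
        signs = np.sign(np.diag(R))
        Q, R = Q * signs, (R.T * signs).T   # fix the QR sign ambiguity
        sums += np.log(np.abs(np.diag(R)))
    return sums / n_iter
```

    The columns of `Q` converge to the backward Lyapunov vectors; directions with nonnegative exponents span the unstable-neutral subspace discussed above.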

    Chaotic dynamics and the role of covariance inflation for reduced rank Kalman filters with model error

    The ensemble Kalman filter and its variants have been shown to be robust for data assimilation in high-dimensional geophysical models, with localization, using ensembles of extremely small size relative to the model dimension. However, a reduced-rank representation of the estimated covariance leaves a large-dimensional complementary subspace unfiltered. Utilizing the dynamical properties of the filtration of the backward Lyapunov vectors, this paper explores a previously unexplained mechanism, providing a novel theoretical interpretation for the role of covariance inflation in ensemble-based Kalman filters. Our derivation of the forecast error evolution describes the dynamic upwelling of the unfiltered error from outside the span of the anomalies into the filtered subspace. Analytical results for linear systems explicitly describe the mechanism for this upwelling and the associated recursive Riccati equation for the forecast error, while nonlinear approximations are explored numerically.
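    As a concrete reference point for the role of inflation discussed above, here is a minimal stochastic EnKF analysis step with multiplicative inflation applied to the forecast anomalies. The interface (a state-by-member ensemble matrix, a linear observation operator `H`, an observation covariance `R`) is a generic textbook formulation, not the paper's specific filter.

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, inflation=1.02, rng=None):
    """Stochastic EnKF analysis step; multiplicative covariance
    inflation is applied to the forecast anomalies before the update."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_ens = ensemble.shape[1]
    mean = ensemble.mean(axis=1, keepdims=True)
    anomalies = inflation * (ensemble - mean)       # inflate the spread
    ensemble = mean + anomalies
    Pf = anomalies @ anomalies.T / (n_ens - 1)      # forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    # one independently perturbed observation per member
    perturbed = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(y_obs.size), R, size=n_ens).T
    return ensemble + K @ (perturbed - H @ ensemble)
```

    Inflation factors slightly above one, as here, compensate for variance the reduced-rank ensemble cannot represent, which is the mechanism the paper analyses dynamically.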

    Linking the anomaly initialization approach to the mapping paradigm: a proof-of-concept study

    Seasonal-to-decadal predictions are initialized using observations of the present climatic state in full field initialization (FFI). Such model integrations undergo a drift toward the model attractor due to model deficiencies that incur a bias in the model. The anomaly initialization (AI) approach reduces the drift by adding an estimate of the bias onto the observations, at the expense of a larger initial error. In this study FFI is associated with the fidelity paradigm, and AI is associated with an instance of the mapping paradigm, in which the initial conditions are mapped onto the imperfect model attractor by adding a fixed error term; the mapped state on the model attractor should correspond to the nature state. Two diagnostic tools assess how well AI conforms to its own paradigm under various circumstances of model error: the degree of approximation of the model attractor is measured by calculating the overlap of the PDF of the AI initial conditions with the model PDF; and the sensitivity to random error in the initial conditions reveals how well the selected initial conditions on the model attractor correspond to the nature states. As a useful reference, the initial conditions of FFI are subjected to the same analysis. Hindcast experiments using a hierarchy of low-order coupled climate models show that the initial conditions generated using AI approximate the model attractor only under certain conditions: differences in higher-than-first-order moments between the model and nature PDFs must be negligible. Where such conditions fail, FFI is likely to perform better.
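    The contrast between the two initialization strategies can be written down in one line each: FFI uses the observed state directly, while AI shifts the observed anomaly onto the model's own climatology (equivalently, adds a fixed bias-like term to the observation). The scalar formulation below is an illustrative reduction; in practice these operations act on full model state vectors.

```python
def full_field_init(obs):
    """FFI: initialize the forecast directly from the observed state."""
    return obs

def anomaly_init(obs, obs_clim_mean, model_clim_mean):
    """AI: map the observed anomaly onto the model attractor by adding
    it to the model climatological mean, i.e. obs plus the fixed
    difference between model and observed climatologies."""
    return model_clim_mean + (obs - obs_clim_mean)
```

    With an observed state of 2.0, an observed climatology of 1.0 and a model climatology of 1.5, AI yields 2.5: the anomaly (+1.0) is preserved, but the state now sits relative to the model's mean, which is what reduces the drift.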

    On the numerical integration of the Lorenz-96 model, with scalar additive noise, for benchmark twin experiments

    Relatively little attention has been given to the impact of discretization error on twin experiments with the stochastic form of the Lorenz-96 equations when the dynamics are fully resolved but random. We study a simple form of the stochastically forced Lorenz-96 equations that is amenable to higher-order time-discretization schemes in order to investigate these effects. We provide numerical benchmarks for the overall discretization error, in both the strong and the weak sense, for several commonly used integration schemes, and compare these methods with respect to the biases introduced into ensemble-based statistics and filtering performance. We focus on the distinction between strong and weak convergence of the numerical schemes, highlighting which of the two concepts is relevant to the problem at hand. Using this analysis, we suggest a mathematically consistent framework for the treatment of these discretization errors in ensemble forecasting and data assimilation twin experiments, enabling unbiased and computationally efficient benchmark studies. Pursuant to this, we provide a novel derivation of the order 2.0 strong Taylor scheme for numerically generating the truth twin in the stochastically perturbed Lorenz-96 equations.
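    For context, the lowest-order scheme against which higher-order methods like the order 2.0 strong Taylor scheme are benchmarked is the Euler-Maruyama discretization. A sketch for the Lorenz-96 system with additive noise follows; the exact noise structure used in the paper may differ, and here each component receives an independent Wiener increment with a common amplitude `sigma`.

```python
import numpy as np

def l96_tendency(x, F=8.0):
    """Deterministic Lorenz-96 tendency with cyclic boundaries:
    dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def euler_maruyama_step(x, dt, sigma, rng):
    """One Euler-Maruyama step for dX = f(X) dt + sigma dW; the Wiener
    increment dW has standard deviation sqrt(dt) per component."""
    dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)
    return x + l96_tendency(x) * dt + sigma * dW
```

    Euler-Maruyama converges with strong order 0.5 for general noise; the point of the paper's higher-order scheme is precisely to remove the discretization bias such a low-order method introduces into twin-experiment truths.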

    Impact of rheology on probabilistic forecasts of sea ice trajectories: application for search and rescue operations in the Arctic

    We present a sensitivity analysis and discuss the probabilistic forecast capabilities of the novel sea ice model neXtSIM used in hindcast mode. The study pertains to the response of the model to uncertainty in the winds, using probabilistic forecasts of ice trajectories. neXtSIM is a continuous Lagrangian numerical model that uses an elasto-brittle rheology to simulate the ice response to external forces. The sensitivity analysis is based on a Monte Carlo sampling of 12 members. The response of the model to the uncertainties is evaluated in terms of the simulated ice drift distances from the initial positions, and from the mean position of the ensemble, over the mid-term forecast horizon of 10 days. The simulated ice drift is decomposed into advective and diffusive parts, which are characterised separately, both spatially and temporally, and compared to what is obtained with a free-drift model, that is, when the ice rheology plays no role in the modelled physics of the ice. The seasonal variability of the model sensitivity is presented and shows the role of ice compactness and rheology in the ice drift response at both local and regional scales in the Arctic. Indeed, the ice drift simulated by neXtSIM in summer is close to that obtained with the free-drift model, while the more compact and solid ice pack shows significantly different mechanical and drift behaviour in winter. For the winter period analysed in this study, we also show that, in contrast to the free-drift model, neXtSIM reproduces the sea ice Lagrangian diffusion regimes found in observed trajectories. The forecast capability of neXtSIM is also evaluated using a large set of real buoy trajectories and compared to that of the free-drift model. We find that neXtSIM performs significantly better in simulating sea ice drift, both in terms of forecast error and as a tool to assist search and rescue operations, although the sources of uncertainty assumed for the present experiment are not sufficient for complete coverage of the observed IABP positions.
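    The advective/diffusive split of ensemble drift used above can be illustrated with a simple definition: the advective part is the displacement of the ensemble-mean position from its start, and the diffusive part is the mean spread of members about that ensemble mean. This is a schematic decomposition for illustration; the paper's diagnostics are more elaborate.

```python
import numpy as np

def drift_decomposition(trajectories):
    """Split ensemble drift into an advective part (displacement of the
    ensemble-mean position from its start) and a diffusive part (mean
    distance of members from the ensemble mean).
    trajectories: array of shape (members, times, 2)."""
    mean_traj = trajectories.mean(axis=0)                       # (times, 2)
    advective = np.linalg.norm(mean_traj - mean_traj[0], axis=-1)
    diffusive = np.linalg.norm(trajectories - mean_traj,
                               axis=-1).mean(axis=0)
    return advective, diffusive
```

    In a free-drift model both parts are set purely by the forcing, whereas a rheology couples members through internal ice stresses and changes the diffusive regimes, which is the contrast the study quantifies.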

    Data assimilation as a learning tool to infer ordinary differential equation representations of dynamical models

    Recent progress in machine learning has shown how to forecast and, to some extent, learn the dynamics of a model from its output, resorting in particular to neural networks and deep learning techniques. We show how the same goal can be achieved directly using data assimilation techniques, without relying on machine learning software libraries, with a view to high-dimensional models. The dynamics of a model are learned from observations of its output, and an ordinary differential equation (ODE) representation of this model is inferred using a recursive nonlinear regression. Because the method is embedded in a Bayesian data assimilation framework, it can learn from partial and noisy observations of a state trajectory of the physical model. Moreover, a space-wise local representation of the ODE system is introduced and is key to coping with high-dimensional models. It has recently been suggested that neural network architectures could be interpreted as dynamical systems; reciprocally, we show that our ODE representations are reminiscent of deep learning architectures. Furthermore, numerical analysis considerations of stability shed light on the assets and limitations of the method. The method is illustrated on several chaotic discrete and continuous models of various dimensions, with or without noisy observations, with the goal of identifying or improving the model dynamics, building a surrogate or reduced model, or producing forecasts solely from observations of the physical model.
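    Stripped of the Bayesian data assimilation machinery, the core idea of inferring an ODE representation by regression reduces, in the simplest scalar linear case, to fitting a tendency model to finite-difference estimates of the derivative. The sketch below recovers the coefficient of dx/dt = a x from a sampled trajectory; it is a toy analogue, not the paper's method, which handles partial, noisy observations and high-dimensional local representations.

```python
import numpy as np

def infer_linear_ode(x, dt):
    """Least-squares fit of the coefficient a in dx/dt = a x, using
    centred finite differences as estimates of the tendency."""
    dxdt = (x[2:] - x[:-2]) / (2.0 * dt)   # centred differences
    xs = x[1:-1]
    return np.sum(xs * dxdt) / np.sum(xs * xs)

dt = 0.01
t = np.arange(0.0, 5.0, dt)
x = np.exp(-0.5 * t)           # exact trajectory of dx/dt = -0.5 x
a_hat = infer_linear_ode(x, dt)   # a_hat is close to -0.5
```

    With noisy observations the finite differences become unusable, which is precisely why the paper embeds the regression in a data assimilation framework instead.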

    Improving weather and climate predictions by training of supermodels

    Recent studies demonstrate that weather and climate predictions can potentially be improved by dynamically combining different models into a so-called “supermodel”. Here, we focus on the weighted supermodel, in which the supermodel's time derivative is a weighted superposition of the time derivatives of the imperfect models, an approach referred to as weighted supermodeling. A crucial step is to train the weights of the supermodel on the basis of historical observations. We apply two different training methods to a supermodel of up to four different versions of the global atmosphere–ocean–land model SPEEDO. The standard version is regarded as the truth. The first training method is based on an idea called cross pollination in time (CPT), in which models exchange states during the training. The second is a synchronization-based learning rule, originally developed for parameter estimation. We demonstrate that both training methods yield climate simulations and weather predictions of superior quality compared to the individual model versions. Supermodel predictions also outperform predictions based on the commonly used multi-model ensemble (MME) mean. Furthermore, we find evidence that negative weights can improve predictions in cases where model errors do not cancel (for instance, when all models are warm with respect to the truth). In principle, the proposed training schemes are applicable to state-of-the-art models and historical observations. A prime advantage of the proposed training schemes is that, in the present context, relatively short training periods suffice to find good solutions. Additional work is needed to assess the limitations due to incomplete and noisy data, to combine models that are structurally different (in resolution and state representation, for instance) and to evaluate cases for which the truth falls outside of the model class.
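    The weighted supermodel construction has a compact algebraic core: the supermodel tendency is a weighted sum of the member models' tendencies. The sketch below illustrates this with two imperfect scalar models whose errors cancel under equal weights; the function names and toy models are hypothetical, not the SPEEDO configuration.

```python
def supermodel_tendency(x, tendencies, weights):
    """Weighted supermodel: the time derivative is a weighted
    superposition of the imperfect models' time derivatives
    (weights may be negative, as the study finds useful)."""
    return sum(w * f(x) for w, f in zip(weights, tendencies))

# Two imperfect versions of the "true" model dx/dt = -x:
f_fast = lambda x: -1.2 * x   # decays too quickly
f_slow = lambda x: -0.8 * x   # decays too slowly
# Equal weights recover the true tendency in this toy case.
```

    Training amounts to choosing the weights so that the combined tendency tracks observations; when all member errors share a sign, no convex combination cancels them, which is where the negative weights mentioned above come in.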

    Bayesian inference of chaotic dynamics by merging data assimilation, machine learning and expectation-maximization

    The reconstruction from observations of high-dimensional chaotic dynamics such as geophysical flows is hampered by (i) the partial and noisy observations that can realistically be obtained, (ii) the need to learn from long time series of data, and (iii) the unstable nature of the dynamics. To achieve such inference from observations over long time series, it has been suggested to combine data assimilation and machine learning in several ways. We show how to unify these approaches from a Bayesian perspective using expectation-maximization and coordinate descent. In doing so, the model, the state trajectory and the model error statistics are all estimated together. Implementations and approximations of these methods are discussed. Finally, we numerically and successfully test the approach on two relevant low-order chaotic models with distinct identifiability.
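    The alternation described above, jointly estimating the model and the state trajectory, can be caricatured as a coordinate-descent loop on a scalar AR(1) process: an E-like step that smooths the states given the current model, and an M-like step that re-fits the model coefficient to the smoothed states. Everything here (the crude one-sided smoother, the fixed blending weight) is a toy stand-in for the paper's Bayesian expectation-maximization scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic truth: a scalar AR(1) process with coefficient 0.9,
# observed with additive noise.
a_true = 0.9
x = np.zeros(400)
for k in range(1, x.size):
    x[k] = a_true * x[k - 1] + 0.1 * rng.normal()
y = x + 0.05 * rng.normal(size=x.size)       # noisy observations

a = 0.0                                      # initial model guess
for _ in range(20):
    # E-like step: crude one-sided smoother blending each observation
    # with the current model's one-step prediction.
    xs = y.copy()
    for k in range(1, xs.size):
        xs[k] = 0.5 * y[k] + 0.5 * a * xs[k - 1]
    # M-like step: re-fit the model coefficient to the smoothed states.
    a = np.sum(xs[1:] * xs[:-1]) / np.sum(xs[:-1] ** 2)
```

    After a few iterations the coefficient settles near the true value; a proper EM scheme would replace the ad hoc smoother with a Bayesian smoother and also update the error statistics, as in the paper.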